Asynchronous Gossip Algorithm for Stochastic Optimization: Constant Stepsize Analysis∗
Authors
1 Electrical and Computer Engineering Department, University of Illinois at Urbana-Champaign, Urbana IL 61801, USA [email protected]
2 Industrial and Enterprise Systems Engineering Department, University of Illinois at Urbana-Champaign, Urbana IL 61801, USA [email protected]
3 Electrical and Computer Engineering Department, University of Illinois at Urbana-Champaign, Urbana IL 61801, USA [email protected]
Abstract
Similar works
On the Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm
We introduce a new framework for the convergence analysis of a class of distributed constrained non-convex optimization algorithms in multi-agent systems. The aim is to search for local minimizers of a non-convex objective function that is assumed to be a sum of the agents' local utility functions. The algorithm under study consists of two steps: a local stochastic gradient descent at each ...
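As a rough illustration of the two-step structure described above, here is a minimal Python sketch. Since the abstract is cut off before the second step, the consensus-averaging-plus-projection form, the box constraint, and the mixing matrix W are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of one round of a two-step multi-agent projected
# stochastic gradient update; the consensus/projection second step is an
# assumption, not taken from the paper.
import numpy as np

def noisy_grad(grad_fn, x, noise_std=0.1, rng=None):
    """Stochastic gradient: true local gradient plus zero-mean noise."""
    rng = rng or np.random.default_rng()
    return grad_fn(x) + noise_std * rng.standard_normal(x.shape)

def project_box(x, lo=-1.0, hi=1.0):
    """Projection onto a box, standing in for the constraint set."""
    return np.clip(x, lo, hi)

def round_update(xs, grad_fns, W, stepsize=0.01, rng=None):
    """xs: per-agent iterates; grad_fns: per-agent gradient callables;
    W: doubly stochastic mixing matrix (one row per agent)."""
    # Step 1: local stochastic gradient descent at each agent.
    ys = [x - stepsize * noisy_grad(g, x, rng=rng) for x, g in zip(xs, grad_fns)]
    # Step 2 (assumed): average with neighbors, then project.
    return [project_box(sum(W[i, j] * ys[j] for j in range(len(ys))))
            for i in range(len(ys))]
```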
Distributed Time-Varying Stochastic Optimization and Utility-Based Communication
We devise a distributed asynchronous stochastic ε-gradient-based algorithm to enable a network of computing and communicating nodes to solve a constrained discrete-time time-varying stochastic convex optimization problem. Each node updates its own decision variable only once every discrete time step. Under some assumptions (among which, strong convexity, Lipschitz continuity of the gradient, per...
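A minimal Python sketch of a single node's once-per-time-step update is below; the projected-gradient form, the Euclidean-ball constraint, and the drifting quadratic cost are illustrative assumptions, not details from the abstract.

```python
# Illustrative sketch of one node's update for a time-varying stochastic
# convex problem; the cost f_t and the constraint set are hypothetical.
import numpy as np

def project_ball(x, radius=1.0):
    """Projection onto a Euclidean ball (placeholder constraint set)."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def node_step(x, t, stepsize=0.05, noise_std=0.1, rng=None):
    """One discrete time step: noisy gradient of a drifting quadratic
    f_t(x) = 0.5 * ||x - target(t)||^2, followed by projection."""
    rng = rng or np.random.default_rng()
    target = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])  # moving optimum
    grad = (x - target) + noise_std * rng.standard_normal(x.shape)
    return project_ball(x - stepsize * grad)
```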
Distributed Asynchronous Algorithms with Stochastic Delays for Constrained Optimization Problems with Conditions of Time Drift
A distributed asynchronous algorithm for minimizing a function with a nonstationary minimum over a constraint set is considered. The communication delays among the processors are assumed to be stochastic with Markovian character. Conditions which guarantee the mean square and almost sure convergence to the sought solution are presented. We also present an optimal routing application for a netwo...
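The sketch below is only meant to illustrate the setting of stochastic, Markovian communication delays; the two-state delay chain and the plain gradient iteration are hypothetical and are not the paper's algorithm or its routing application.

```python
# Hypothetical illustration: a gradient iteration that consumes delayed
# iterates, with the delay driven by a two-state Markov chain.
import numpy as np

def next_state(state, rng, p_stay=0.8):
    """Two-state Markov chain: keep the current state with prob. p_stay."""
    return state if rng.random() < p_stay else 1 - state

def delayed_gradient_run(grad_fn, x0, steps=50, stepsize=0.05, seed=0):
    rng = np.random.default_rng(seed)
    delays = [0, 2]                      # delay (in steps) for each chain state
    state = 0
    history = [np.asarray(x0, dtype=float)]
    for k in range(steps):
        state = next_state(state, rng)
        d = min(delays[state], k)        # cannot reach back before step 0
        stale = history[-1 - d]          # gradient is evaluated at a stale iterate
        history.append(history[-1] - stepsize * grad_fn(stale))
    return history[-1]
```

For example, delayed_gradient_run(lambda x: x - 3.0, x0=[0.0]) drifts toward 3.0 even though some gradients are computed at stale iterates.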
GoSGD: Distributed Optimization for Deep Learning with Gossip Exchange
We address the issue of speeding up the training of convolutional neural networks by studying a distributed method adapted to stochastic gradient descent. Our parallel optimization setup uses several threads, each applying individual gradient descents on a local variable. We propose a new way of sharing information between different threads based on gossip algorithms that show good consensus co...
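Below is a simplified Python sketch of the gossip pattern between workers, assuming plain pairwise averaging of local variables; the actual GoSGD exchange rule (weights, frequency, and choice of peer) is not reproduced here.

```python
# Simplified gossip-between-workers sketch; pairwise averaging is an
# assumption, not GoSGD's exact exchange rule.
import numpy as np

def local_sgd_step(x, grad_fn, lr=0.01):
    """Each worker applies its own gradient descent step on a local variable."""
    return x - lr * grad_fn(x)

def gossip_exchange(xs, rng):
    """Two randomly chosen workers end up holding the average of their variables."""
    i, j = rng.choice(len(xs), size=2, replace=False)
    avg = 0.5 * (xs[i] + xs[j])
    xs[i], xs[j] = avg, avg.copy()

def train(xs, grad_fns, steps=100, gossip_prob=0.2, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        xs = [local_sgd_step(x, g) for x, g in zip(xs, grad_fns)]
        if rng.random() < gossip_prob:   # occasional, asynchronous-style exchange
            gossip_exchange(xs, rng)
    return xs
```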
Privacy Preservation in Distributed Subgradient Optimization Algorithms
In this paper, some privacy-preserving features for distributed subgradient optimization algorithms are considered. Most of the existing distributed algorithms focus mainly on the algorithm design and convergence analysis, but not the protection of agents' privacy. Privacy is becoming an increasingly important issue in applications involving sensitive information. In this paper, we first show t...
Publication date: 2010